
A well-crafted law can demystify artificial intelligence


AI and the law

The Cambridge Analytica scandal, which broke in 2018, revealed how algorithms could sway the 2016 US presidential election by exploiting Facebook users' personal data to target them with content designed to influence their votes. This case highlights the legal risks associated with artificial intelligence (AI).
 

Should we blame algorithms?

Algorithms rely on personal data collected about the situation, experience, or scenario they model. That data collection is crucial for machine learning, and it can introduce "bias" leading to "technological discrimination". In 2019, the Apple Card was reported to grant men credit lines up to twenty times higher than those of women with the same financial circumstances and credit history. Facial recognition systems have also failed to properly recognize darker skin tones, another instance of discrimination caused by insufficient diversity in the training data.
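To give a rough idea of how such bias can be surfaced, the sketch below compares approval rates across two groups. It is a minimal illustration on invented data; the records, groups, and the "demographic parity" check itself are assumptions for this example, not figures from the Apple Card case.

    # A minimal bias-audit sketch on hypothetical data: compare approval
    # rates across groups. Nothing here reflects any real system or dataset.
    from collections import defaultdict

    # Hypothetical records of (group, approved)
    decisions = [
        ("men", True), ("men", True), ("men", True), ("men", False),
        ("women", True), ("women", False), ("women", False), ("women", False),
    ]

    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)

    for group, count in totals.items():
        print(f"{group}: approval rate = {approvals[group] / count:.0%}")

    # A persistent gap between groups with otherwise similar profiles is a
    # signal that the training data or the model deserves closer scrutiny.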

Nevertheless, algorithms cannot be held "guilty", since they are neither natural nor legal persons. Responsibility lies with the people who decide how they are used and implemented, and who answer for the resulting outcomes.

In the USA and Europe, the justice and policing systems themselves rely on algorithms. In Los Angeles, the PredPol algorithm was "trained" on past crime data to predict where offenses were likely to occur and to direct police patrols. The system was adopted in 2012 but discontinued in 2020 because of the high risk of racial discrimination in its algorithmic suggestions.

In the absence of "well-informed" judgments, algorithms can compromise the independence of justice and harm innocent individuals.

How can we reconcile the remarkable benefits of AI with the need to mitigate its risks?


[Figure: The implementation of algorithms in AI]

Can the judicial system "audit" the implementation of algorithms?

So-called "explicit" algorithms are made of explicitly defined logical rules, as in "decision trees", where each node poses a condition in the form of a question and each possible answer determines the next step (e.g., "if the client is under thirty, then..."), as sketched below.
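As a concrete sketch of such an explicit algorithm, here is a hand-written decision tree; the questions, amounts, and outcomes are invented for illustration only.

    # A hand-written "explicit" decision tree: every rule is readable and
    # auditable. Questions, thresholds, and outcomes are hypothetical.

    def credit_decision(age: int, annual_income: float) -> str:
        if age < 30:                    # node: "is the client under thirty?"
            if annual_income > 40_000:  # node: "is the income above 40,000?"
                return "approve a small credit line"
            return "decline"
        if annual_income > 25_000:      # node for clients aged thirty or more
            return "approve a standard credit line"
        return "refer to manual review"

    print(credit_decision(age=27, annual_income=50_000))
    # -> approve a small credit line

Because every branch is an explicit, human-readable question, an auditor can trace exactly why any given decision was reached.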

Implicit algorithms, such as those used in machine learning and deep-learning neural networks, are difficult, if not impossible, to "audit". Nonetheless, these algorithms are flourishing and show great potential in fields such as medicine, security, finance, and robotics.

Surprisingly, there are methods that allow algorithms to explain themselves: during execution, an algorithm can provide insights into its own functioning.

The term "explainability by design" refers to the methods applied while designing the algorithms, to anticipate the need to explain the way they work. Such methods define the means to achieve that.

By providing information that enhances our understanding of their preliminary "decisions", algorithms can assist humans in making final decisions across various domains.

This ability is crucial for providing elements of explainability and transparency, especially in the context of justice. It is equally imperative that humans assume responsibility and acquire the skills needed to examine these tools critically, in the interest of citizens.



Legal perspectives

Rather than regulating algorithms directly, the law should demand technical and scientific governance from companies and teams: guaranteeing a minimum level of explanation of the technologies, verifying the absence of bias in both the data and the trained algorithms, safeguarding inclusive data collection, and promoting the responsible use of these tools.

Data scientist Cathy O'Neil suggests creating an evaluation and regulation agency, inspired by the Food and Drug Administration (FDA), to approve the widespread deployment of algorithms in specific domains and applications.

The approval criteria of this "Data and Algorithmic Administration" could be developed by teams of scientists, data and algorithm experts, industry specialists, and academic researchers working closely with legal professionals. However, this approach might slow the deployment of algorithms and reduce the innovation capacity of technological actors.

It is also crucial to draft such legislation as precisely as possible, to avoid detrimental "technological voids" in the same way legislators strive to prevent "legal voids".

The prospect of establishing legislation on AI is promising, particularly given the effectiveness of the General Data Protection Regulation (GDPR), which protects personal data and fosters trust among the citizens of the 27 European Union member states. Its Californian counterpart, the California Consumer Privacy Act (CCPA), may one day be extended to the entire United States.

Lastly, it is the responsibility of users to understand, at least in general terms, the functioning of AI tools, especially when the law grants them the right to demand clear and unambiguous explanations from technology actors.

By undertaking these efforts, technology can serve the needs of an "informed user" instead of confusing a "captive user".
 

Joe Abi Aad


Credit: The ideas expressed in this article are largely based on the revealing book "Les algorithmes font-ils la loi?" ("Do algorithms make the law?") by Aurélie Jean.
 
